
    Autonomous Satellite Operations Via Secure Virtual Mission Operations Center

    The science community is interested in improving its ability to respond to rapidly evolving, transient phenomena via autonomous rapid reconfiguration, which derives from the ability to assemble separate but collaborating sensors and data forecasting systems to meet a broad range of research and application needs. Current satellite systems typically require human intervention to respond to triggers from dissimilar sensor systems. Additionally, satellite ground services often need to be coordinated days or weeks in advance. Finally, the boundaries between the various sensor systems that make up such a Sensor Web are defined by factors such as link delay and connectivity, data and error rate asymmetry, data reliability, quality-of-service provisions, and trust, complicating autonomous operations. Over the past ten years, researchers from the NASA Glenn Research Center (GRC), General Dynamics, Surrey Satellite Technology Limited (SSTL), Cisco, Universal Space Networks (USN), the U.S. Geological Survey (USGS), the Naval Research Laboratory, the DoD Operationally Responsive Space (ORS) Office, and others have worked collaboratively to develop a virtual mission operations capability. Called VMOC (Virtual Mission Operations Center), this capability allows cross-system queuing of dissimilar, mission-unique systems through a common security scheme and published application programming interfaces (APIs). Collaborative VMOC demonstrations over the last several years have supported the standardization of spacecraft-to-ground interfaces needed to reduce costs, maximize space effects for the user, and allow the generation of new tactics, techniques, and procedures that lead to responsive space employment.
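
    The abstract describes the VMOC pattern (a common security scheme plus published APIs for cross-system tasking) without publishing the interface itself. The sketch below shows what a client of such a service might look like; the host, endpoints, JSON fields, and token-based authentication flow are all illustrative assumptions, not the actual VMOC API.

```python
# Hypothetical sketch of a VMOC-style client: authenticate once against a
# shared security scheme, then queue a collection task through a published
# API. Every endpoint and field below is invented for illustration; the
# paper does not publish the actual interface.
import requests

VMOC_URL = "https://vmoc.example.org/api/v1"   # placeholder host

# Obtain a token from the common security scheme (assumed token-based).
auth = requests.post(
    f"{VMOC_URL}/auth/token",
    json={"client_id": "usgs-sensorweb", "client_secret": "..."},
    timeout=10,
)
token = auth.json()["access_token"]

# Queue a task triggered by a dissimilar sensor system; the VMOC, not the
# client, decides which spacecraft and ground contacts can satisfy it.
task = requests.post(
    f"{VMOC_URL}/tasks",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "trigger": "thermal-anomaly",
        "target": {"lat": 19.42, "lon": -155.29},
        "product": "multispectral-image",
        "max_latency_s": 3600,
    },
    timeout=10,
)
print(task.json())
```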

    Space Station Freedom ground data system: Design and operations

    Over the past year, the Space Station Freedom (SSF) Program (SSFP) ground data distribution system has become independent of a number of data systems that were to have been provided by other National Aeronautics and Space Administration (NASA) programs. Consequently, the SSFP has outlined the basic architecture of a new data system dedicated to supporting SSF requirements, accomplished through a complete redesign of the ground network and a reallocation of selected functions. Several aspects of the new ground data distribution system are unique among NASA programs, making SSF ground data distribution one of the most extensive and complex data management challenges encountered in the arena of space operations. A description of this system is the main focus of this paper.

    NASA’s Earth Science Technology Office CubeSats for Technology Maturation

    NASA's Earth Science Technology Office (ESTO) has been supporting the development of multiple CubeSats to advance various technologies for future Earth Science observations. The goal of this work is to support instrument and information systems technology risk reduction, through flight validation in the space environment, in support of the Earth Science Decadal Survey. Within the next 18 months, three CubeSats will have completed system development and testing. Two will launch on GEMSat L-39, planned no earlier than December 2013, while the third will launch no earlier than October 2014. MCubed/COVE-2 (a reflight mission) will take mid-resolution images of the Earth at approximately 200 m per pixel while carrying the COVE payload. COVE will validate a real-time, high-data-rate image processing algorithm on the radiation-hardened, space-grade Virtex-5QV FPGA from Xilinx. This is a key capability for the Multiangle Spectropolarimetric Imager (MSPI) instrument planned for the ACE Decadal Survey mission concept. The IPEX CubeSat will validate autonomous science and product delivery technologies, demonstrating a twenty-fold reduction in data volume for low-latency, near-real-time product generation. This technology supports the VSWIR spectrometer and thermal IR imager of the proposed HyspIRI mission concept. Finally, GRIFEX will perform engineering assessments of a state-of-the-art, all-digital, in-pixel, high-frame-rate Read-Out Integrated Circuit (ROIC). Its high throughput capacity will enable the GEO-CAPE mission concept to make hourly high spatial and spectral resolution measurements of rapidly changing atmospheric chemistry and pollution with the Panchromatic Fourier Transform Spectrometer (PanFTS) instrument.
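
    As a rough illustration of the kind of onboard data-volume reduction attributed to IPEX above, the sketch below generates a single derived product from a simulated spectral cube and compares its size with the raw data. The cube shape, band indices, and choice of index are assumptions sized to make the arithmetic land near twenty-fold; they are not details of the IPEX payload.

```python
# Illustrative sketch (not IPEX flight software) of onboard product
# generation: downlink a derived index instead of the raw spectral cube.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.random((20, 512, 512), dtype=np.float32)   # 20 bands, 512x512 px

# Derived product: a two-band normalized-difference index (one band out).
nir, red = cube[15], cube[5]
index = (nir - red) / (nir + red + 1e-6)

raw_mb = cube.nbytes / 1e6
product_mb = index.nbytes / 1e6
print(f"raw cube: {raw_mb:.1f} MB, product: {product_mb:.1f} MB, "
      f"reduction: {raw_mb / product_mb:.0f}x")        # 20x for these shapes
```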

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented as an end-to-end set of GPU-optimized algorithms, written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speedup of four orders of magnitude compared with the equivalent CPU version: the simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
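
    The abstract names its toolchain exactly (Python compiled to CUDA kernels with Numba), so the parallelization pattern can be sketched: one GPU thread per readout pixel, each summing contributions from the drifting charges. The toy 1/r^2 weighting and all array shapes below are assumptions for illustration, not the simulator's actual induction model, and running it requires Numba plus a CUDA-capable GPU.

```python
# Minimal Numba-CUDA sketch of a per-pixel induced-current sum.
import numpy as np
from numba import cuda

@cuda.jit
def induced_current(pixel_x, pixel_y, charge_x, charge_y, charge_q, out):
    i = cuda.grid(1)                 # one thread per readout pixel
    if i < out.size:
        total = 0.0
        # Sum the (toy) contribution of every drifting charge to this pixel.
        for j in range(charge_q.size):
            dx = pixel_x[i] - charge_x[j]
            dy = pixel_y[i] - charge_y[j]
            total += charge_q[j] / (dx * dx + dy * dy + 1.0)
        out[i] = total

n_pix, n_q = 1000, 10_000            # ~10^3 pixels, as quoted in the abstract
rng = np.random.default_rng(1)
px, py = rng.random(n_pix), rng.random(n_pix)
cx, cy, q = rng.random(n_q), rng.random(n_q), rng.random(n_q)

out = cuda.device_array(n_pix)       # result buffer on the GPU
threads = 128
blocks = (n_pix + threads - 1) // threads
induced_current[blocks, threads](
    cuda.to_device(px), cuda.to_device(py),
    cuda.to_device(cx), cuda.to_device(cy),
    cuda.to_device(q), out)
print(out.copy_to_host()[:5])
```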

    DUNE Offline Computing Conceptual Design Report

    This document describes the conceptual design of the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), that is, the offline computing needed to accomplish its physics goals. The goals of the experiment include (1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, (2) studying astrophysical neutrino sources and rare processes, and (3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve these goals by acquiring, storing, cataloging, reconstructing, simulating, and analyzing ~30 PB of data per year from DUNE and its prototypes, concentrating on tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources flexible and accessible enough to support creative software solutions as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.
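
    A quick scale check on the ~30 PB/year figure quoted above: averaged over a calendar year, it corresponds to a sustained ingest rate of roughly 1 GB/s, which is the kind of continuous load the cataloging and storage systems described here must absorb.

```python
# Back-of-envelope check on the ~30 PB/year data rate (SI units assumed).
PB = 1e15                                # bytes per petabyte
seconds_per_year = 365.25 * 24 * 3600
rate_gb_per_s = 30 * PB / seconds_per_year / 1e9
print(f"average ingest rate: {rate_gb_per_s:.2f} GB/s")  # ~0.95 GB/s
```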